SYSTEM FOR MONITORING THE STATE OF VIGILANCE OF AN OPERATOR
Patent abstract:
The present invention relates to a system for monitoring the state of alertness of an operator (2) comprising: - a camera (1) equipped with a sensor sensitive in the near infrared, - a circuit for real-time processing of the signals delivered by said camera (1), for determining characteristic points in each of said images and, by analysis of said characteristic points, information relating to at least part of the indicators comprising: ○ the inclination of the head in three orthogonal directions, ○ the position of the pupil, ○ the opening of the eye, ○ the configuration of the mouth, - a computer controlled by a program determining the state of alertness according to said indicators and their temporal evolution.
Publication number: FR3038770A1
Application number: FR1556584
Filing date: 2015-07-10
Publication date: 2017-01-13
Inventors: Patrice Lacroix; Jimmy Seng
Applicant: Innov Plus
IPC main class:
Patent description:
Title: Surveillance System for the State of Vigilance of an Operator

Field of the invention

The present invention relates to the field of automatic analysis of the state of alertness of an operator, in particular a driver of a road, rail, sea or air vehicle, or an operator driving or monitoring equipment or an industrial, marine or aviation site. Among the various known solutions, the invention relates more particularly to those based on facial analysis to detect changes representative of a change in the state of alertness or the appearance of precursor signs of drowsiness.

State of the art

In the state of the art, various solutions are known for the automatic analysis of vigilance from facial recognition.

US patent 6927694 describes a method for detecting the alertness and vigilance of people under conditions of fatigue, lack of sleep, and exposure to psychotropic substances such as alcohol and drugs. The invention may have particular applications for truckers, bus drivers, train operators, pilots, operators of marine craft and stationary heavy equipment, and students and employees, whether by day or in nocturnal conditions. The invention analyzes the position of a person's head and facial features with a single on-board camera in a fully automatic system, which can classify the rotation of the head, detect the opening and closing of the eyes and of the mouth, and detect blinking of the eyes. The outputs can be visual and audible, to alert the driver directly.

Patent FR2773521 describes a device that implements an optoelectronic sensor assembly and an electronic unit disposed inside a motor vehicle, the sensor being oriented towards the head of the driver seated in the vehicle and placed at the same location as the interior rear-view mirror, which comprises a one-way mirror behind which the sensor is disposed.
After detection of the presence of a driver in place in the vehicle, the device performs, in the frames of the video signal output by the sensor and thanks to the electronic unit, the framing first of the entire face and then of the eyes; it then determines the successive durations of the blinks of the eyes, these durations being compared to a threshold lying between the typical blink duration of an awake person and that of a sleepy person; a signal (emitted by an alarm) apt to awaken the driver is triggered when the duration of his blinks exceeds said threshold.

Patent application US20140375785 relates to an example of a system that can monitor the stress and fatigue of a subject. The system may include a light source configured to direct illuminating light onto the face of the subject. The illuminating light can reflect off the subject's face to form reflected light. The system may include an image processor configured to locate an eye in video-rate images, extract signs of fatigue from the localized eye, and determine a level of fatigue of the subject in part from the signs of fatigue. The image processor may also be configured to locate a facial region away from the eye in the video-rate images, extract signs of stress from the localized facial region, and determine a stress level of the subject from the signs of stress.

Disadvantages of the prior art

The solutions proposed in the prior art present a major problem for implementation in on-board equipment with limited processing capacity. Real-time processing of high-resolution images, to enable the extraction of useful information according to the methods of the prior art, requires a large computing power that is not very compatible with processors such as those found in a cell phone or in on-board equipment.
Moreover, the solutions of the prior art are not very robust with respect to the direction of image capture: when the operator turns his head and is no longer facing the acquisition camera, the analysis processing loses its effectiveness. The solutions of the prior art are very sensitive to the precise positioning of the camera with respect to the operator.

Another disadvantage of the solutions of the prior art is the difficulty of adapting and optimizing the processing to the specificities of a given operator. Even if the indices of loss of vigilance can be classified generically, a given operator may show significant differences and present peculiarities in his precursor signs of falling asleep or of decreased alertness.

Finally, the solutions of the prior art are limited to single-person processing, and do not allow general information on the risk of drowsiness or loss of vigilance to be shared as a function of time or geolocation.

Solution provided by the invention

In order to remedy these drawbacks, the present invention relates, in its most general sense, to a system for monitoring the state of vigilance of an operator comprising:
- a camera provided with a sensor sensitive in the near infrared, oriented to acquire an image of the face occupying at least 12% of the effective surface of the sensor,
- a real-time processing circuit for the signals delivered by said camera, for determining characteristic points in each of said images and, by analysis of said characteristic points, information relating to at least a part of the indicators comprising:
o the inclination of the head in three orthogonal directions,
o the position of the pupil,
o the opening of the eye,
o the configuration of the mouth,
- a computer controlled by a program determining the state of vigilance according to said indicators and their temporal evolution,
said system comprising:
o a first permanent memory for recording a plurality of files FDi obtained by prior processing on a set of images IAi
and of an indicator qualifying the membership of each of said images VAi in a predetermined class [real face, not a face];
o a second permanent memory for recording a plurality of files FCi obtained by prior processing on a set of face images Vi associated with annotations;
said processing circuit performing a step of locating, in the digital image delivered by the camera, the areas corresponding to the face, by applying a detection process based on said files IAi, and a processing step determining the characteristic points by applying a detection process based on said files Vi; the information relating to the state of the head of said operator comprises indicators of inclination of the head in three orthogonal directions.

According to a variant, said computer is further controlled by a program for detecting the direction of gaze and calculating an additional indicator.

Preferably, the acquisition and processing frequency is greater than 30 frames per second.

According to an advantageous embodiment, a time-stamped recording of said indicators is carried out, and the temporal evolution is calculated from said recordings over a time span of at least two seconds.

According to a variant, the system comprises an alert means remotely controlled by said computer, activating a haptic means.

According to a particular embodiment, the system comprises an alert means controlled by said computer, activating a sound means.

According to another particular embodiment, the system comprises an alert means controlled by said computer, activating a light means.

According to another particular embodiment, the system further comprises environmental sensors, not linked to said operator, delivering an additional signal for the calculation of the state of vigilance.

Advantageously, the system further comprises at least one physiological sensor linked to said operator delivering an additional signal for the calculation of the state of alertness.
According to a variant, the system further comprises means for transmitting said indicators to a server, and means for calculating generic information [not linked to a given operator] on risk areas.

Advantageously, it further comprises a third memory for recording data from a plurality of external equipment and a server, for recording additional information for the calculation of the state of vigilance.

According to a particular mode of implementation, the server is configured to receive further external data and to transmit additional information to each piece of local equipment.

Preferably, the system further comprises means for transmitting to a server at least a portion of the images acquired by the camera, in order to complete the FCi files.

According to a preferred embodiment, the system further comprises a source emitting in the near infrared to illuminate the face of the operator.

According to another advantageous variant, the system further comprises an electronic toll circuit.

Detailed description of a non-limiting embodiment of the invention

The present invention will be better understood on reading the description which follows, relating to a non-limiting embodiment, with reference to the accompanying drawings, in which:
FIG. 1 represents a schematic view of the hardware architecture of the equipment;
FIG. 2 represents a schematic view of the installation of the equipment in a passenger compartment;
FIG. 3 represents a schematic view of the functional architecture of the equipment;
FIG. 4 represents a schematic view of the nature of the morphological characteristics detected on a face;
FIG. 5 represents a schematic view of the different movements analyzed.

Hardware architecture

FIG. 1 represents a schematic view of the hardware architecture of an on-board equipment that can be used at the driving position of a motor vehicle, coach or truck, and FIG. 2 an example of installation in the passenger compartment of a vehicle.
The equipment (19) comprises an acquisition camera (1) positioned in the passenger compartment of the vehicle, facing the driver (2), with a slight offset so as not to mask the field of vision. It can in particular be held by an arm fixed on the dashboard or on the windshield. This camera (1) is mounted in a housing also comprising a light source (3) emitting in the near infrared at 850 nm. The emission power is typically 100 mW per steradian (and preferably between 50 and 200 mW/sr). The IR emission is continuous, or can be modulated according to the ambient lighting and the acquisition frequency. The acquisition sensor of the camera (1) has a resolution of at least 540x480 (VGA type) at an acquisition rate of 100 images per second. It is associated with a high-pass filter with a cut-off wavelength between 800 and 830 nm. The optics of the camera are determined, for an application embedded in a driving station, to provide a field of view of 80x80 cm at a distance of one meter. The camera (1) can be associated with a brightness sensor (16) and a motorized support for controlling automatic tracking of the movements of the face. The camera (1) is connected to a computer (4) comprising a microprocessor (5). This microprocessor (5) integrates a multicore computing unit (6) and a graphics processor (7), ROM (8) and RAM or flash (9) memories for storing intermediate data, and a connector (19) to receive a flash memory in which the code of the application program is stored. It also integrates input/output interfaces, power management, and protocol and security layers. The equipment also comprises control buttons (17) and a display screen (10) forming the human-machine interface, allowing the operator to make the settings, and also intended to display graphical alerts generated by the computer (4) as well as graphic representations symbolizing the state and level of vigilance.
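The optical requirement above (an 80 x 80 cm field of view at a distance of one meter) fixes the angular field the lens must provide; a minimal sketch of that calculation under a simple pinhole-camera assumption (the function name is illustrative):

```python
import math

def required_fov_deg(width_m: float, distance_m: float) -> float:
    """Full field-of-view angle needed to cover a plane of `width_m`
    centred at `distance_m` from the lens (pinhole approximation)."""
    return math.degrees(2.0 * math.atan((width_m / 2.0) / distance_m))

# The patent's figure: an 80 cm x 80 cm field at 1 m from the camera.
fov = required_fov_deg(0.80, 1.0)
print(f"required FOV = {fov:.1f} degrees")  # about 43.6 degrees
```

Since the stated field is square, the same angle (roughly 43.6 degrees) applies both horizontally and vertically.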
An external light source, for example a light strip or a flash (14), can be controlled remotely by the equipment to complete the visual information. Optionally, the box controls the display of the type of alert together with the descriptors that led to its triggering, for example a symbolic graphic signal of risk of falling asleep, associated with textual or vocal information on the number of eye closings, blinks or yawns. The box can trigger, during initialization, an informative sequence, possibly personalized according to the historical data associated with the operator, the time and/or location, or external data (temperature, weather, ...). The equipment (1) also comprises a loudspeaker (11), or an interface controlling the vehicle's sound equipment, and a Bluetooth-type interface (12) for transmitting control signals to a bracelet or connected vibrating object (13), in order to transmit haptic alerts to the operator in the form of vibrations. The equipment further comprises inputs for additional sensors (15), for example a sound environment sensor, an ambient temperature sensor, an accelerometer, a gyroscope, or location and speed signals provided by a GPS, etc. The equipment also comprises radio communication means (18) for transmitting data to a server, and receiving external data, according to a known protocol. The power supply is provided by connection to an accessory socket of the vehicle.

Functional description

Figure 3 shows a general view of the functional architecture. It comprises a first hardware block (20) relating to the management of the equipment (1) and its components. The second block (21) corresponds to the processing of the video stream coming from the camera for the extraction of static morphological characteristics CMi. These processes are performed on images in 256 gray levels (8 bits).
The third block (22) corresponds to the processing applied to the static morphological characteristics CMi to calculate dynamic indicators related to the face. The fourth block (23) corresponds to the processing applied to these dynamic indicators as well as to additional data coming from internal sensors and, optionally, from a server. The fifth block (24) corresponds to the processing performed by a server based on the data from a plurality of equipment, to provide general data and improve the models recorded in each equipment.

The operation of the equipment is as follows. The equipment (1) is arranged facing the operator. The activation of the equipment leads in a known manner to the automatic loading of the program recorded in the flash memory (19). The execution of the program triggers a first sequence of verification and dialogue with the operator. During this sequence, the equipment (1) transmits voice and text information, for example the recommendations and safety instructions for the use of the system. It prompts the operator to formally accept the use, in order to comply with local regulations, especially regarding issues of liability or protection of personal data. This sequence also makes it possible to check the operation of the vibrating, sound and visual interfaces, to select those that the operator wishes to activate, and to make the settings, for example of the level or mode of these alerts for the journey. After this initialization sequence, the equipment carries out various processing operations in real time.

Face detection - block (21)

This processing concerns the recognition, in the images acquired at a frequency of 100 images per second, of the face in the optical field, for two purposes which will be developed hereafter:
- the search, in a historical base, of the data associated with a recognized face;
- the morphological analysis processing for calculating the state of alertness.
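The requirement that the face cover at least 12% of the effective sensor surface acts as a gate on subsequent processing; a minimal sketch of that gate, assuming bounding boxes come from a separate detector (in practice a Viola-Jones cascade such as OpenCV's `cv2.CascadeClassifier`) — the function names here are illustrative:

```python
def face_area_fraction(box, frame_w, frame_h):
    """Fraction of the sensor's effective surface covered by a detected
    face bounding box (x, y, w, h)."""
    x, y, w, h = box
    return (w * h) / float(frame_w * frame_h)

def select_face(boxes, frame_w, frame_h, min_fraction=0.12):
    """Keep the largest detection, provided it covers at least
    `min_fraction` of the image (the patent's 12% criterion)."""
    candidates = [b for b in boxes
                  if face_area_fraction(b, frame_w, frame_h) >= min_fraction]
    return max(candidates, key=lambda b: b[2] * b[3], default=None)
```

For example, on a 640x480 frame a 240x200 box covers about 15.6% and is kept, while a 100x100 box (about 3.3%) is rejected.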
The detection of the face in the image acquired by the camera is carried out according to the Viola-Jones method, described for example in the article Paul Viola and Michael Jones, "Robust Real-time Face Detection", IJCV, 2004, p. 137-154. This step makes it possible to isolate the area containing the face in the entire image. This area represents a subset of at least 12% of the entire image in the optical field, which reduces the computational power required for subsequent processing. The next step is to apply processing to compute the static morphological features of the face in the previously delimited area. These morphological characteristics CMi, illustrated in FIG. 4, comprise, for example:
the characteristic points of the eye and the eyelid, for each of the eyes:
o center of the upper-outer part of the right eyelid (30)
o center of the upper-inner part of the right eyelid (34)
o outer corner of the right eye (32)
o inner corner of the right eye (33)
o center of the lower-inner part of the right eyelid (31)
o center of the pupil of the right eye (35)
o and various other characteristic points of the eye
the characteristic points of the mouth:
o the right corner of the outside of the lip (36)
o the left corner of the outside of the lip (37)
o the highest point of the right outside of the lip (38)
o the highest point of the left outside of the lip (39)
o the central point of the outside of the upper lip (40)
o the inflection point of the right part of the lower lip (41)
o the inflection point of the left part of the lower lip (42)
o the right corner of the inside of the lip (43)
o the left corner of the inside of the lip (44)
o the central point on the upper part of the inner lip (45)
o the central point on the lower part of the inner lip (46)
o and if necessary other characteristic points of the lips ...
the characteristic points of the nose:
o the base of the inter-nostril septum of the nose (47)
o the center of the right nostril (48)
o the center of the left nostril (49).

Construction of a deformable model

The processing carried out in block (21) is performed according to models previously generated and stored in the ROM (8). An example of processing for the construction of the deformable model is presented below. A model is generated from a sequence of a plurality of face images, on which a manual characterization has been performed. This characterization consists in designating, on each of the images, manually or automatically, the characteristic points CMi referred to above. A set of data (CMi, xCMi, yCMi) is thus constructed, consisting of the coordinates (xCMi, yCMi) associated with their qualifier CMi. From these data, a shape model constituting a reference template is constructed. For each of the characteristic points, the distribution of the points resulting from the different faces processed, and the limit of their variations, are defined on this template. The recognition of these characteristic points is carried out by a principal component analysis (PCA), by a comparison processing between the characteristics CMRi of a reference model and the offsets between each of the points CMi. The steps leading to such a model correspond to a known method, described for example in the article "Active Appearance Models Revisited" by Iain Matthews and Simon Baker. The processing is based on the exploitation of several face models, whose viewpoint is not constant but varies according to the orientation of the face with respect to the camera. After generating the model, each point of the CMi set is associated with a reference position and with the deviations from this position that are authorized so that the set represents a plausible human face.
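The shape model described above — a mean template plus bounded deformation modes obtained by PCA — can be sketched with NumPy. The training data, the number of modes kept and the plus-or-minus 3-sigma plausibility bound below are illustrative assumptions, not values from the patent:

```python
import numpy as np

def build_shape_model(shapes, n_modes=2):
    """Point-distribution model: mean shape + principal deformation modes.
    `shapes` is (n_samples, 2*n_points): the (x, y) coordinates of the
    CMi characteristic points, flattened, one annotated face per row."""
    mean = shapes.mean(axis=0)
    centered = shapes - mean
    # SVD of the centred data gives the PCA modes directly.
    _, s, vt = np.linalg.svd(centered, full_matrices=False)
    std = s[:n_modes] / np.sqrt(len(shapes))  # per-mode standard deviation
    return mean, vt[:n_modes], std

def clamp_to_plausible(shape, mean, modes, std, k=3.0):
    """Project a shape onto the model and clip each mode coefficient to
    plus-or-minus k*sigma, so the result stays a plausible human face."""
    b = modes @ (shape - mean)
    b = np.clip(b, -k * std, k * std)
    return mean + modes.T @ b
```

At run time, clamping a fitted shape back inside the model's limits is what keeps tracked points from drifting into implausible configurations.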
Thumbnails associated with the deformable model

This deformable model is completed by vignettes corresponding to localized textures around the characteristic points, representative of the neighborhood of these points. This information makes it possible to validate the areas of interest in which the characteristic points are located. A deformable model is thus constructed that associates with each characteristic point CMi a vignette of a small number of pixels surrounding it.

Baseline

Several deformable models are generated according to the different variations over the plurality of exploitable images (great diversity in the poses and expressions of the faces). These models are stored in a reference database that will be used for real-time processing of the images from the equipment (1). The registered models include:
- a frontal face reference model
- a left quarter-profile reference model
- a right quarter-profile reference model
- a left profile reference model
- a right profile reference model
Each model has its own set of associated thumbnails.

Initializing face detection

During the first acquisitions, the process selects from the reference model database the model that is the most suitable. This selection is performed by calculating a match score between an image acquired by the equipment (1) and the reference models recorded in the database, to determine the reference model for which this score is maximized. The computer performs facial identification processing to search for a usage history. Depending on the identity recognized, the computer loads the parameters associated with the recognized profile. If no face is recognized, the computer loads standard parameters, built by local or extended learning (from data coming from a server).
Processing of each image transmitted by the equipment

For an image It, a selection is made of the area containing the graphic information corresponding to the face, and a processing is applied to identify the characteristic points CMi,t from the data of the selected reference model. The extraction of these characteristic points CMi,t is carried out recurrently, with a first step of rough characterization then additional steps of fine characterization and validation of the identified points. The characteristic points CMi,t are tracked based on their deformable model, their own vignette, and the points CMi,t-1 at time t-1. The deformable model makes it possible to place the points at average positions and to limit their deformations within the limit of a plausible face shape. The thumbnails are used to determine the precise location of the associated characteristic point on the image at time t. The exact position results from a correlation calculation between said vignette and an area of the image close to the characteristic point. The additional steps consist in verifying the coherence of the relative position of pairs of characteristic points, validating the points whose coherence indicator reaches a threshold value, and re-estimating the residual points. The result of this step is a set of time-stamped points CMi,t for the processed image, associated with an indicator of the degree of confidence, representative of the reality of the qualification and of the position of the characteristic point, and optionally associated with geolocation data from a GPS. This processing is performed in parallel, for all the characteristic points, for example 56 points, by execution on the coupled graphics processor (7).
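The vignette-based refinement — correlating each point's stored texture patch against a small window around its position at t-1 — can be sketched as a normalized cross-correlation search. The window size and search radius are illustrative assumptions:

```python
import numpy as np

def ncc(a, b):
    """Normalized cross-correlation between two equal-size patches."""
    a = a - a.mean()
    b = b - b.mean()
    denom = np.sqrt((a * a).sum() * (b * b).sum())
    return float((a * b).sum() / denom) if denom else 0.0

def refine_point(image, vignette, prev_xy, search=5):
    """Locate a characteristic point at time t by correlating its vignette
    against a small window around its position CMi,t-1."""
    h, w = vignette.shape
    px, py = prev_xy
    best, best_xy = -2.0, prev_xy
    for dy in range(-search, search + 1):
        for dx in range(-search, search + 1):
            x, y = px + dx, py + dy
            if x < 0 or y < 0:
                continue
            patch = image[y:y + h, x:x + w]
            if patch.shape != vignette.shape:
                continue
            score = ncc(patch, vignette)
            if score > best:
                best, best_xy = score, (x, y)
    return best_xy, best
```

The returned score can serve as the per-point confidence indicator mentioned above; in a real implementation the exhaustive search would typically be replaced by an optimized primitive such as OpenCV's `cv2.matchTemplate`.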
Calculation of static indicators

These data are processed to calculate an indicator representative of the position of the head and of the head pose Pt (static orientation about 3 axes: pitch, roll, yaw), as shown in Figure 5, according to the points CMi,t with the highest confidence score, the history of the pose of the head being used to verify the consistency of the pose Pt. The pose Pt of the head is calculated according to a three-dimensional model of a face, whose vertices are associated with the 2D points of the reference model. The three-dimensional model is generated upstream and is non-deformable. This model is constituted by a rigid mesh, made up of a plurality of groups of points (for example the points of the mesh of the jaw, or of the eyes, defining the width and the spacing of the eyes). The positions of the points of a group are calculated during initialization to adapt a predefined generic mesh to the morphology determined during the first image acquisitions of the operator, so as to adapt this standard mesh to the morphology of a particular operator. This solution significantly reduces the computing power required compared to solutions where the three-dimensional model is recalculated for each new image acquisition of the face. This personalized model can be updated periodically, with a period much longer than that of the acquisition, for example every minute or every ten minutes, while the periodicity of image acquisition is of the order of a hundredth of a second. The recalculation of the three-dimensional model can be activated upon repeated detection of discrepancies between the 2D points and the points of the 3D mesh projected onto the 2D image. This recalculation makes it possible to adapt the three-dimensional model, during a long journey, to the morphology of the driver.
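The pose Pt comes from fitting the rigid 3D mesh to the tracked 2D points (a perspective-n-point problem, solvable with e.g. OpenCV's `cv2.solvePnP`); the decomposition of the resulting rotation into the three indicators — pitch, yaw, roll — can be sketched as follows, assuming a ZYX angle convention (an assumption, not stated in the patent):

```python
import numpy as np

def rot(pitch, yaw, roll):
    """Rotation matrix from the three head-pose angles (radians),
    composed as R = Rz(roll) @ Ry(yaw) @ Rx(pitch)."""
    cx, sx = np.cos(pitch), np.sin(pitch)
    cy, sy = np.cos(yaw), np.sin(yaw)
    cz, sz = np.cos(roll), np.sin(roll)
    Rx = np.array([[1, 0, 0], [0, cx, -sx], [0, sx, cx]])
    Ry = np.array([[cy, 0, sy], [0, 1, 0], [-sy, 0, cy]])
    Rz = np.array([[cz, -sz, 0], [sz, cz, 0], [0, 0, 1]])
    return Rz @ Ry @ Rx

def pose_angles(R):
    """Recover (pitch, yaw, roll) from R (valid away from gimbal lock)."""
    yaw = np.arcsin(-R[2, 0])
    pitch = np.arctan2(R[2, 1], R[2, 2])
    roll = np.arctan2(R[1, 0], R[0, 0])
    return pitch, yaw, roll
```

The round trip `pose_angles(rot(p, y, r))` returns the original angles, which is how the decomposition can be sanity-checked against the pose solver's output.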
Back-projection of the 2D points to 3D, using the intrinsic parameters (for example radial distortions) and extrinsic parameters (for example rotation, translation and scaling matrices) of the acquisition system, makes it possible to control the properties of the optics. Another static processing concerns the determination of an indicator representative of the state of each eye (for example the percentage of opening of the eye), as well as an indicator representative of the position of the pupil. This processing consists of performing an image pre-processing in the area of interest around the eye, then modeling the eye and predicting its movements and its position with respect to the characteristic points (right, left, up, down, ...). Another static processing concerns the determination of an indicator representative of the direction of gaze. This processing lies in the calculation of the coupling between the position of the head and the direction of the gaze. Another static processing concerns the determination of an indicator representative of the state of the mouth. This processing involves isolating the part of the image in the area of interest around the mouth and then assigning a status classification of the mouth: neutral - talking - yawning. These different static indicators are recorded in a time-stamped and, where appropriate, geolocated form in a local database, and are transmitted periodically to the server.
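The eye-opening indicator, and the blink and closure-duration indicators that block (22) derives from its temporal evolution, can be sketched as follows. The opening-ratio formulation (vertical lid distance over eye width, using the eyelid points (30), (31) and eye corners (32), (33) named earlier) and the 0.15 "closed" threshold are illustrative assumptions, not values from the patent:

```python
def eye_opening(p30, p31, p32, p33):
    """Eye-opening ratio: vertical eyelid distance over horizontal eye
    width, from (x, y) landmark tuples (30: upper lid, 31: lower lid,
    32: outer corner, 33: inner corner)."""
    vert = abs(p30[1] - p31[1])
    horiz = abs(p32[0] - p33[0])
    return vert / horiz if horiz else 0.0

def blink_stats(openings, timestamps, closed_below=0.15):
    """Count blinks and measure eyelid-closure durations from the
    time-stamped series of opening ratios."""
    blinks, closures = 0, []
    start = None
    for t, o in zip(timestamps, openings):
        if o < closed_below and start is None:
            start = t                       # eye just closed
        elif o >= closed_below and start is not None:
            blinks += 1                     # eye reopened: one blink
            closures.append(t - start)
            start = None
    return blinks, closures
```

Long entries in `closures` (rather than a high blink count alone) are what would flag micro-sleep episodes in the vigilance calculation of block (23).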
Calculation of dynamic indicators of the temporal evolution of the static indicators

These static indicators are processed to calculate dynamic indicators of temporal evolution, for example:
o number of blinks of the eyelids
o closing time of the eyelids
o closing and/or opening time of the mouth
o distribution of gaze times by zone categories

Calculation of one or more qualitative or quantitative indicators of the state of vigilance - block (23)

The static indicators, including head positioning, as well as the dynamic indicators, are used periodically to calculate quantitative and/or qualitative indicators representative of the state of alertness or sleepiness (micro-sleep, distraction, etc.), and to control the triggering of alarms and displays if threshold values are exceeded. These indicators are also transmitted, in a time-stamped and geolocated form, to the server, to allow global processing of the information transmitted by several devices over extended time periods, and to provide data such as the periods or zones generating atypical levels of loss of vigilance or somnolence. The equipment also controls the display of the evolution of the state of vigilance on a display screen. This visualized information can be supplemented by external data from the server, for a prediction of areas of loss of vigilance according to the data of the server. The external data are also used to parameterize the alerts and the algorithms for calculating the indicators, for example to adjust the level of sensitivity or the frequency of processing.

Processing by the server

The data coming from several embedded systems can be collected periodically by a server, for example asynchronously, as a set of data comprising, for a sampling of acquisition times:
- the characteristic points calculated at the output of block (21) from the images acquired by the camera,
- the data transmitted by the additional sensors (GPS, accelerometer, brightness, temperature, ...),
- the time stamping of these data and the association of an identifier of the operator,
- optionally the image data acquired by the camera, from which the characteristic points were calculated.
These data sets are exploited on the server for three purposes:
- the optimization of the three-dimensional models and of the calculation models of the computed variables
- the constitution of a baseline of the precursor signs of drowsiness or hypovigilance
- the constitution of a reference base of the temporal and geographical zones where vigilance losses occur repeatedly.
This information is retransmitted periodically to the various on-board equipment:
- to update the models and calculation variables
- to enrich the data processed in particular geographic areas and time periods, to trigger alerts and/or to parameterize the processing, including its sensitivity.
This information can also be transmitted to fixed equipment, for example traffic signs, whose state varies according to the data transmitted by the server and, where appropriate, the proximity of on-board equipment and the signals it transmits.

Learning the model

The formation of the deformable model is carried out at an initial stage from a collection of fixed face images, taken with people of different morphologies, under different shooting conditions and different orientations. From these images, an automatic or manual pointing is carried out, for each image, of each of the recognized characteristic points, of the position of the head and of the expression, to build a learning base. This data set is then subjected to a statistical processing by principal component analysis to construct an average template associated with deformation modes of given standard deviation, providing a numerical model for the automatic determination of the characteristic points and the head pose from an unknown image.
Multifunction box

According to a particular implementation, the system according to the invention is integrated in a single box further comprising a highway electronic toll circuit. This box is fixed on the windshield in the central zone, or on the headliner of the passenger compartment, in the detection cone of the electronic toll equipment. This position is particularly suitable for the acquisition of the face, because the space separating the box thus positioned from the driver is devoid of elements likely to mask the optical field. Such a dual-purpose box improves the safety of the operator but also of other users, offering the possibility of encouraging a driver whose system records an abnormally high rate of loss-of-alertness signals to rest, or not to enter a new traffic section.
Claims:
Claims

[1] A system for monitoring the state of alertness of an operator (2) comprising:
- a camera (1) equipped with a sensor sensitive in the near infrared, oriented to acquire a facial image occupying at least 12% of the effective surface of the sensor,
- a real-time processing circuit for the signals delivered by said camera (1), for determining characteristic points in each of said images and, by analysis of said characteristic points, information relating to at least a part of the indicators comprising:
o the inclination of the head in three orthogonal directions,
o the position of the pupil,
o the opening of the eye,
o the configuration of the mouth,
- a computer controlled by a program determining the state of alertness according to said indicators and their temporal evolution,
characterized in that said system comprises:
o a first permanent memory for the recording of a plurality of files FDi obtained by a prior processing on a set of images IAi and of an indicator qualifying the membership of each of said images VAi in a predetermined class [real face, not a face],
o a second permanent memory for the recording of a plurality of files FCi obtained by prior processing on a set of face images Vi associated with annotations,
said processing circuit performing a step of locating, in the digital image delivered by the camera, the areas corresponding to the face, by applying a detection process based on said files IAi, and a processing determining the characteristic points by applying a detection process based on said files Vi, the information relating to the state of the head of said operator (2) comprising indicators of inclination of the head in three orthogonal directions.

[2] A system for monitoring the state of vigilance of an operator according to claim 1, characterized in that said computer is further controlled by a program for detecting the direction of gaze and calculating an additional indicator.
3. System for monitoring the state of alertness of an operator according to claim 1 or 2, characterized in that the acquisition and processing frequency is greater than 30 frames per second.

4. System for monitoring the state of alertness of an operator according to at least one of claims 1 to 3, characterized in that a time-stamped recording of said indicators is performed, and in that the temporal evolution is calculated from said recordings over a time range of at least two seconds.

5. System for monitoring the state of alertness of an operator according to at least one of the preceding claims, characterized in that it comprises an alert means remotely controlled by said computer, activating a haptic means.

6. System for monitoring the state of alertness of an operator according to at least one of the preceding claims, characterized in that it comprises an alert means controlled by said computer, activating a sound means.

7. System for monitoring the state of alertness of an operator according to at least one of the preceding claims, characterized in that it comprises an alert means controlled by said computer, activating a light means.

8. System for monitoring the state of alertness of an operator according to at least one of the preceding claims, characterized in that it further comprises environmental sensors not related to said operator, delivering an additional signal for the calculation of the state of alertness.

9. System for monitoring the state of alertness of an operator according to at least one of the preceding claims, characterized in that it further comprises at least one physiological sensor related to said operator, delivering an additional signal for the calculation of the state of alertness.
10. System for monitoring the state of alertness of an operator according to at least one of the preceding claims, characterized in that it further comprises means for transmitting said indicators to a server, and means for calculating generic information [not linked to a given operator] on risk areas.

11. System for monitoring the state of alertness of an operator according to at least one of the preceding claims, characterized in that it further comprises a third memory for recording data coming from a plurality of items of equipment and from a server, for the recording of additional information for the calculation of the state of alertness.

12. System for monitoring the state of alertness of an operator according to claim 10 or 11, characterized in that the server is configured to further receive external data and to transmit additional information to each local item of equipment.

13. System for monitoring the state of alertness of an operator according to at least one of the preceding claims, characterized in that it further comprises means for transmitting to a server at least a portion of the images acquired by the camera, in order to supplement the FC± files.

14. System for monitoring the state of alertness of an operator according to at least one of the preceding claims, characterized in that it further comprises a source emitting in the near infrared to illuminate the face of the operator.

15. System for monitoring the state of alertness of an operator according to at least one of the preceding claims, characterized in that it further comprises an electronic toll circuit.
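The indicator chain of claim 1 (locate the face, extract characteristic points, derive the eye opening and head inclination) can be illustrated with a minimal sketch. The landmark names, the eye-opening ratio and the roll computation below are illustrative assumptions for one possible set of characteristic points, not the patent's own detection process:

```python
import math

def eye_opening(landmarks):
    """Ratio of vertical lid distance to horizontal eye width.

    `landmarks` maps hypothetical point names to (x, y) pixel coordinates;
    a persistently small ratio suggests a closing eye.
    """
    vertical = math.dist(landmarks["upper_lid"], landmarks["lower_lid"])
    horizontal = math.dist(landmarks["inner_corner"], landmarks["outer_corner"])
    return vertical / horizontal

def head_roll_degrees(landmarks):
    """Roll of the head: angle of the line joining the two eye centres."""
    (xl, yl), (xr, yr) = landmarks["left_eye"], landmarks["right_eye"]
    return math.degrees(math.atan2(yr - yl, xr - xl))
```

The two other inclination axes (pitch and yaw) would be derived analogously from depth cues or from the relative positions of nose and mouth landmarks.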
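The time-stamped recording and the temporal evolution over at least two seconds (claim 4) can be sketched as a sliding window over indicator samples. The PERCLOS-style closed-eye fraction and the 0.2 threshold are illustrative assumptions, not values specified by the patent:

```python
from collections import deque

class IndicatorWindow:
    """Time-stamped buffer of one indicator, trimmed to the last `span` seconds."""

    def __init__(self, span=2.0):
        self.span = span        # claim 4 requires a range of at least two seconds
        self.samples = deque()  # (timestamp_s, value) pairs

    def add(self, t, value):
        self.samples.append((t, value))
        # drop samples older than the analysis window
        while self.samples and self.samples[0][0] < t - self.span:
            self.samples.popleft()

    def closed_fraction(self, threshold=0.2):
        """Fraction of recent samples where the eye opening looks closed."""
        if not self.samples:
            return 0.0
        closed = sum(1 for _, v in self.samples if v < threshold)
        return closed / len(self.samples)
```

At the acquisition frequency of claim 3 (above 30 frames per second), a two-second window holds at least 60 samples, enough to smooth out normal blinks.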
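The server-side calculation of generic risk-area information from indicators uploaded by many vehicles (claim 10) can be sketched as grid-cell aggregation of anonymised, geolocated loss-of-alertness events. The cell size and the event threshold below are hypothetical choices:

```python
from collections import Counter

def risk_areas(events, cell_deg=0.01, min_events=3):
    """Bucket anonymised (lat, lon) loss-of-alertness events into grid cells
    and keep the cells reaching `min_events` occurrences.

    The ~1 km cell size and the threshold of 3 events are assumptions made
    for illustration, not parameters given in the patent.
    """
    buckets = Counter(
        (round(lat / cell_deg), round(lon / cell_deg)) for lat, lon in events
    )
    return {cell: n for cell, n in buckets.items() if n >= min_events}
```

The resulting cells are generic in the sense of claim 10: they describe road sections, not operators, and can be pushed back to each local item of equipment as the additional information of claims 11 and 12.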
Similar technologies:
Publication number | Publication date | Patent title
FR3038770A1 | 2017-01-13 | SYSTEM FOR MONITORING THE STATUS OF VIGILANCE OF AN OPERATOR
US20160046298A1 | 2016-02-18 | Detection of driver behaviors using in-vehicle systems and methods
US20180365533A1 | 2018-12-20 | System and method for contextualized vehicle operation determination
EP0527665B1 | 1995-08-23 | Board device and method for detecting and following the position of a vehicle on the road and driving-aid device applying the method
WO2017207925A1 | 2017-12-07 | Connected device for behavioural monitoring of an individual and for detecting and/or preventing an anomaly
CA2320815A1 | 1999-07-22 | Method and device for detecting drowsiness and preventing a driver of a motor vehicle from falling asleep
FR2734701A1 | 1996-12-06 | SYSTEM FOR MONITORING THE EYES TO DETECT SLEEPING BEHAVIOR
EP2307948B1 | 2012-10-03 | Interactive device and method of use
EP3265334B1 | 2019-05-01 | Device and method for predicting a vigilance level of a driver of a motor vehicle
US10521675B2 | 2019-12-31 | Systems and methods of legibly capturing vehicle markings
EP3424030A1 | 2019-01-09 | Personalized device and method for monitoring a motor vehicle driver
KR102051136B1 | 2020-01-08 | Artificial intelligence dashboard robot base on cloud server for recognizing states of a user
WO2017149046A1 | 2017-09-08 | Device and method for monitoring a driver of a transport vehicle
FR3063703A1 | 2018-09-14 | METHOD FOR AIDING THE DRIVING OF A RAILWAY VEHICLE AND RAILWAY VEHICLE EQUIPPED WITH A SUPERVISION SYSTEM FOR IMPLEMENTING SAID METHOD
EP3274809A1 | 2018-01-31 | Control method, control device, system and motor vehicle comprising such a control device
US11281944B2 | 2022-03-22 | System and method for contextualized vehicle operation determination
EP3529099B1 | 2021-03-24 | Method and system for controlling a use of a vehicle by a driver
EP3866064A1 | 2021-08-18 | Method for authentication or identification of an individual
EP3783532A1 | 2021-02-24 | Device for detecting persons in a drowning situation or a situation with risk of drowning
FR3088458A1 | 2020-05-15 | METHOD FOR CONTEXTUALLY RECOGNIZING AND DESCRIBING AN OBJECT OF INTEREST FOR A VISUALLY DEFICIENT USER, DEVICE IMPLEMENTING SAID METHOD
P Mathai | 2021 | A New Proposal for Smartphone-Based Drowsiness Detection and Warning System for Automotive Drivers
FR3066460A1 | 2018-11-23 | DEVICE FOR MONITORING A DRIVER WITH A CORRECTION MODULE BASED ON A PREDETERMINED REPETITIVE MOTION
FR3057516A1 | 2018-04-20 | DEVICE FOR PREVENTING DANGEROUS SITUATIONS FOR A CONDUCTOR OF A TRANSPORT VEHICLE AND ASSOCIATED METHOD
FR3101593A1 | 2021-04-09 | Determination of a state of density of road traffic
FR3057517A1 | 2018-04-20 | DEVICE FOR PREVENTING DANGEROUS SITUATIONS FOR A CONDUCTOR OF A TRANSPORT VEHICLE AND ASSOCIATED METHOD
Patent family:
Publication number | Publication date
FR3038770B1 | 2021-03-19
EP3320527A1 | 2018-05-16
WO2017009560A1 | 2017-01-19
US20180204078A1 | 2018-07-19
Cited documents:
Publication number | Filing date | Publication date | Applicant | Patent title
US5293427A | 1990-12-14 | 1994-03-08 | Nissan Motor Company, Ltd. | Eye position detecting system and method therefor
EP2060993A1 | 2007-11-13 | 2009-05-20 | Delphi Technologies, Inc. | An awareness detection system and method
US20120242819A1 | 2011-03-25 | 2012-09-27 | TK Holdings Inc. | System and method for determining driver alertness
WO2003065084A1 | 2002-01-31 | 2003-08-07 | Donnelly Corporation | Vehicle accessory module
US7526103B2 | 2004-04-15 | 2009-04-28 | Donnelly Corporation | Imaging system for vehicle
US10074024B2 | 2010-06-07 | 2018-09-11 | Affectiva, Inc. | Mental state analysis using blink rate for vehicles
US20150294169A1 | 2014-04-10 | 2015-10-15 | Magna Electronics Inc. | Vehicle vision system with driver monitoring
DE102014011264A1 | 2014-07-28 | 2016-01-28 | GM Global Technology Operations LLC | Method for calculating a return time
WO2018085804A1 | 2016-11-07 | 2018-05-11 | Nauto Global Limited | System and method for driver distraction determination
US20190092337A1 | 2017-09-22 | 2019-03-28 | Aurora Flight Sciences Corporation | System for Monitoring an Operator
US11017249B2 | 2018-01-29 | 2021-05-25 | Futurewei Technologies, Inc. | Primary preview region and gaze based driver distraction detection
US10962780B2 | 2015-10-26 | 2021-03-30 | Microsoft Technology Licensing, Llc | Remote rendering for virtual images
US11144756B2 | 2016-04-07 | 2021-10-12 | Seeing Machines Limited | Method and system of distinguishing between a glance event and an eye closure event
WO2017190798A1 | 2016-05-06 | 2017-11-09 | Telefonaktiebolaget Lm Ericsson | Dynamic load calculation for server selection
CN110855934A | 2018-08-21 | 2020-02-28 | Beijing Didi Infinity Technology and Development Co., Ltd. | Fatigue driving identification method, device and system, vehicle-mounted terminal and server
JP2020175791A | 2019-04-19 | 2020-10-29 | Yazaki Corporation | Lighting control system and lighting control method
CN110213548B | 2019-07-01 | 2021-09-07 | Nanjing Paiguang Intelligent Perception Information Technology Co., Ltd. | Rail train driver behavior comprehensive monitoring and warning method
Legal events:
2016-07-04 | PLFP | Fee payment | Year of fee payment: 2
2017-01-13 | PLSC | Publication of the preliminary search report | Effective date: 20170113
2017-07-31 | PLFP | Fee payment | Year of fee payment: 3
2018-07-24 | PLFP | Fee payment | Year of fee payment: 4
2019-07-11 | PLFP | Fee payment | Year of fee payment: 5
2020-07-16 | PLFP | Fee payment | Year of fee payment: 6
2021-07-30 | PLFP | Fee payment | Year of fee payment: 7
Priority:
Application number | Publication number | Filing date | Patent title
FR1556584A | FR3038770B1 | 2015-07-10 | OPERATOR VIGILANCE MONITORING SYSTEM
US15/742,820 | US20180204078A1 | 2016-07-08 | System for monitoring the state of vigilance of an operator
PCT/FR2016/051756 | WO2017009560A1 | 2016-07-08 | System for monitoring the state of vigilance of an operator
EP16744813.3A | EP3320527A1 | 2016-07-08 | System for monitoring the state of vigilance of an operator